YARN

In Hadoop version 1.0, also referred to as MRV1 (MapReduce Version 1), MapReduce performed both processing and resource management. In the Hadoop 1.x architecture, a single Job Tracker daemon was responsible for job scheduling and monitoring as well as for managing resources across the cluster. It assigned map and reduce tasks to a number of subordinate processes called Task Trackers, which periodically reported their progress back to the Job Tracker.
MapReduce Version 1.0 - Hadoop YARN - Edureka

This design created a scalability bottleneck around the single Job Tracker. An IBM article notes that, according to Yahoo!, the practical limits of such a design are reached with a cluster of about 5,000 nodes and 40,000 tasks running concurrently. Apart from this limitation, MRV1 utilized computational resources inefficiently, and it confined the Hadoop framework to the MapReduce processing paradigm alone.

To overcome these issues, YARN was introduced in Hadoop version 2.0 in 2012 by Yahoo! and Hortonworks. The basic idea behind YARN is to relieve MapReduce of resource management and job scheduling, and in doing so it gave Hadoop the ability to run non-MapReduce jobs within the Hadoop framework.

Introduction to Hadoop YARN

YARN allows different data processing methods, such as graph processing, interactive processing, stream processing, and batch processing, to run against data stored in HDFS. YARN therefore opens up Hadoop to types of distributed applications beyond MapReduce.

Hadoop v1.0 vs Hadoop v2.0 - Hadoop YARN - Edureka


The YARN architecture consists of the following main components:

  1. Resource Manager: Runs on the master daemon and manages resource allocation in the cluster.
  2. Node Manager: Runs on the slave daemons and is responsible for the execution of tasks on each Data Node.
  3. Application Master: Manages the user job lifecycle and resource needs of individual applications. It works along with the Node Manager and monitors the execution of tasks.
  4. Container: A package of physical resources, including RAM, CPU cores, network, and disk, on a single node.

YARN Components

The image below represents the YARN Architecture.
Components of YARN - Hadoop YARN - Edureka

Resource Manager

  • It is the ultimate authority in resource allocation.
  • This daemon process runs on the master node.
  • On receiving processing requests, it passes parts of the request on to the corresponding Node Managers, where the actual processing takes place.
  • It is the arbitrator of the cluster resources and decides the allocation of the available resources for competing applications.
  • It optimizes cluster utilization, for example by keeping all resources in use as much as possible, against constraints such as capacity guarantees, fairness, and SLAs. (A small client-side sketch of talking to the Resource Manager follows this list.)
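
To make the Resource Manager's role concrete, here is a minimal client-side sketch that asks it for cluster information through Hadoop's YarnClient API. It assumes a yarn-site.xml pointing at the cluster is on the classpath; the class name ClusterInfo is just a label for this example, while the API calls are from the standard Hadoop client library.

    import org.apache.hadoop.yarn.api.records.NodeReport;
    import org.apache.hadoop.yarn.api.records.NodeState;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class ClusterInfo {
      public static void main(String[] args) throws Exception {
        // The client reads the Resource Manager address from yarn-site.xml on the classpath.
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration());
        yarnClient.start();

        // The Resource Manager is the single authority that knows every node's resources.
        System.out.println("Node Managers: "
            + yarnClient.getYarnClusterMetrics().getNumNodeManagers());
        for (NodeReport node : yarnClient.getNodeReports(NodeState.RUNNING)) {
          System.out.println(node.getNodeId() + "  capacity=" + node.getCapability()
              + "  used=" + node.getUsed());
        }
        yarnClient.stop();
      }
    }

Because the Resource Manager tracks every Node Manager's capacity and usage, a single call to it is enough to see the state of the whole cluster.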

Scheduler

  • The scheduler is responsible for allocating resources to the various running applications subject to constraints of capacities, queues etc.
  • It is called a pure scheduler because it does not perform any monitoring or tracking of application status.
  • It offers no guarantee about restarting tasks that fail due to application or hardware failures.
  • Performs scheduling based on the resource requirements of the applications.
  • It has a pluggable policy plug-in that is responsible for partitioning the cluster resources among the various applications. The Capacity Scheduler and the Fair Scheduler are the two plug-ins currently used as schedulers in the Resource Manager (see the configuration sketch below).
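
The choice between these plug-ins is driven by a single configuration property, normally set in yarn-site.xml rather than in code. The sketch below only illustrates reading and overriding that property programmatically; the property key and class names are those shipped with stock Apache Hadoop.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class SchedulerConfig {
      public static void main(String[] args) {
        Configuration conf = new YarnConfiguration();

        // The pluggable policy is selected through yarn.resourcemanager.scheduler.class.
        System.out.println("Configured scheduler: "
            + conf.get("yarn.resourcemanager.scheduler.class"));

        // Typical values (normally placed in yarn-site.xml, not set in code):
        //   org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
        //     (the default in stock Apache Hadoop releases)
        //   org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
        conf.set("yarn.resourcemanager.scheduler.class",
            "org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler");
      }
    }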

Application Manager

  • It is responsible for accepting job submissions.
  • Negotiates the first container from the Resource Manager for executing the application-specific Application Master (a client-side submission sketch follows this list).
  • Manages running the Application Masters in a cluster and provides service for restarting the Application Master container on failure.
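
From a client's point of view, the submission path served by the Application Manager can be sketched with the YarnClient API as below. The application name, queue, launch command, and resource sizes are placeholders; only the API calls themselves come from Hadoop's client library.

    import java.util.Collections;
    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
    import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.client.api.YarnClientApplication;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;
    import org.apache.hadoop.yarn.util.Records;

    public class SubmitApp {
      public static void main(String[] args) throws Exception {
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration());
        yarnClient.start();

        // Ask the Resource Manager (Application Manager) for a new application id.
        YarnClientApplication app = yarnClient.createApplication();
        ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
        ctx.setApplicationName("demo-app");      // placeholder name
        ctx.setQueue("default");                 // placeholder queue

        // Describe the container that will run the Application Master.
        ContainerLaunchContext amContainer = Records.newRecord(ContainerLaunchContext.class);
        amContainer.setCommands(Collections.singletonList("/path/to/launch_am.sh")); // placeholder
        ctx.setAMContainerSpec(amContainer);
        ctx.setResource(Resource.newInstance(1024, 1));   // 1 GB, 1 vcore for the AM container

        // The Application Manager accepts the submission and negotiates the AM's first container.
        ApplicationId appId = yarnClient.submitApplication(ctx);
        System.out.println("Submitted " + appId);
      }
    }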

Node Manager

  • It takes care of individual nodes in a Hadoop cluster and manages user jobs and workflow on the given node.
  • It registers with the Resource Manager and sends heartbeats with the health status of the node.
  • Its primary goal is to manage application containers assigned to it by the resource manager.
  • It keeps up-to-date with the Resource Manager.
  • The Application Master requests an assigned container from the Node Manager by sending it a Container Launch Context (CLC), which includes everything the application needs in order to run. The Node Manager creates the requested container process and starts it (see the sketch after this list).
  • Monitors resource usage (memory, CPU) of individual containers.
  • Performs Log management.
  • It also kills the container as directed by the Resource Manager.
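
Seen from the Application Master's side, handing work to a Node Manager amounts to sending it a Container Launch Context through the NMClient API. The sketch below assumes the container has already been allocated by the Resource Manager; the launch command and class name are placeholders.

    import java.util.Collections;
    import org.apache.hadoop.yarn.api.records.Container;
    import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
    import org.apache.hadoop.yarn.client.api.NMClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;
    import org.apache.hadoop.yarn.util.Records;

    public class LaunchOnNodeManager {
      static NMClient newNmClient() {
        NMClient nmClient = NMClient.createNMClient();
        nmClient.init(new YarnConfiguration());
        nmClient.start();
        return nmClient;
      }

      // 'container' is assumed to have been allocated to us by the Resource Manager already.
      static void launch(NMClient nmClient, Container container) throws Exception {
        // The CLC tells the Node Manager which process to create inside the container.
        ContainerLaunchContext clc = Records.newRecord(ContainerLaunchContext.class);
        clc.setCommands(Collections.singletonList("/path/to/task.sh")); // placeholder command
        nmClient.startContainer(container, clc);  // the Node Manager creates and starts the process
      }
    }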

Application Master

  • An application is a single job submitted to the framework. Each such application has a unique Application Master associated with it which is a framework specific entity.
  • It is the process that coordinates an application’s execution in the cluster and also manages faults.
  • Its task is to negotiate resources from the Resource Manager and work with the Node Manager to execute and monitor the component tasks.
  • It is responsible for negotiating appropriate resource containers from the Resource Manager, tracking their status and monitoring progress.
  • Once started, it periodically sends heartbeats to the Resource Manager to affirm its health and to update the record of its resource demands (a bare-bones sketch of this cycle follows the list).
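
A bare-bones Application Master built on the AMRMClient API shows this register / request / heartbeat / unregister cycle. The host, port, tracking URL, resource sizes, and container count below are placeholders; the calls themselves are from Hadoop's AM client library.

    import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
    import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class MinimalAppMaster {
      public static void main(String[] args) throws Exception {
        AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
        rmClient.init(new YarnConfiguration());
        rmClient.start();

        // 1. Register with the Resource Manager (host, port and tracking URL are placeholders).
        rmClient.registerApplicationMaster("", 0, "");

        // 2. Negotiate containers: three containers of 2 GB / 1 vcore, anywhere in the cluster.
        Resource capability = Resource.newInstance(2048, 1);
        Priority priority = Priority.newInstance(0);
        for (int i = 0; i < 3; i++) {
          rmClient.addContainerRequest(new ContainerRequest(capability, null, null, priority));
        }

        // 3. Heartbeat via allocate(); each response may carry newly granted containers.
        int granted = 0;
        while (granted < 3) {
          AllocateResponse response = rmClient.allocate(0.1f);
          granted += response.getAllocatedContainers().size(); // hand these to an NMClient to launch
          Thread.sleep(1000);
        }

        // 4. Unregister once the work is done.
        rmClient.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "done", "");
      }
    }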

Container

  • It is a collection of physical resources such as RAM, CPU cores, and disks on a single node.
  • YARN containers are managed by a Container Launch Context (CLC), the record that describes the container life-cycle. This record contains a map of environment variables, dependencies stored in remotely accessible storage, security tokens, payload for Node Manager services, and the command necessary to create the process (see the sketch after this list).
  • It grants rights to an application to use a specific amount of resources (memory, CPU etc.) on a specific host.
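
The CLC record described above maps onto the ContainerLaunchContext class field by field. The sketch below only names those fields with placeholder values; it is not a complete launch setup, and the class name ClcSketch is made up for this example.

    import java.nio.ByteBuffer;
    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import org.apache.hadoop.yarn.api.records.ApplicationAccessType;
    import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
    import org.apache.hadoop.yarn.api.records.LocalResource;

    public class ClcSketch {
      static ContainerLaunchContext build() {
        Map<String, LocalResource> localResources = Collections.emptyMap(); // dependencies fetched from remote storage (e.g. HDFS)
        Map<String, String> environment = Collections.emptyMap();           // environment variables for the process
        List<String> commands = Collections.singletonList("/path/to/task.sh"); // command that creates the process (placeholder)
        Map<String, ByteBuffer> serviceData = Collections.emptyMap();       // payload for Node Manager auxiliary services
        ByteBuffer tokens = null;                                           // security tokens
        Map<ApplicationAccessType, String> acls = Collections.emptyMap();   // application ACLs
        return ContainerLaunchContext.newInstance(
            localResources, environment, commands, serviceData, tokens, acls);
      }
    }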

Application Submission in Hadoop YARN

The figure below summarizes application submission: the client submits the job and gets an Application ID, sends the Application Submission Context, the Resource Manager starts the container launch for the Application Master, and the Application Master then allocates resources and launches the containers that execute the job.
Application Submission - Hadoop YARN - Edureka

Application Workflow in Hadoop YARN

Refer to the image below and the following steps involved in the application workflow of Apache Hadoop YARN:

  1. The client submits an application.
  2. The Resource Manager allocates a container to start the Application Master.
  3. The Application Master registers itself with the Resource Manager.
  4. The Application Master negotiates containers from the Resource Manager.
  5. The Application Master notifies the Node Manager to launch the containers.
  6. The application code is executed in the containers.
  7. The client contacts the Resource Manager or the Application Master to monitor the application's status (see the monitoring sketch below).
  8. The Application Master unregisters itself from the Resource Manager.

Application Workflow - Hadoop YARN - Edureka
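
Step 7 above, the client polling for the application's status, can be sketched with YarnClient.getApplicationReport. The appId parameter is assumed to be the id returned by submitApplication, and the class name MonitorApp is just a label for this example.

    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.api.records.ApplicationReport;
    import org.apache.hadoop.yarn.api.records.YarnApplicationState;
    import org.apache.hadoop.yarn.client.api.YarnClient;

    public class MonitorApp {
      // Polls the Resource Manager until the application reaches a terminal state.
      static void waitForCompletion(YarnClient yarnClient, ApplicationId appId) throws Exception {
        while (true) {
          ApplicationReport report = yarnClient.getApplicationReport(appId);
          YarnApplicationState state = report.getYarnApplicationState();
          System.out.println(appId + " is " + state);
          if (state == YarnApplicationState.FINISHED
              || state == YarnApplicationState.FAILED
              || state == YarnApplicationState.KILLED) {
            return;
          }
          Thread.sleep(2000);
        }
      }
    }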

